# MLX adaptation
All entries below are large language models published by mlx-community.

| Model | License | Description | Downloads | Likes |
|---|---|---|---|---|
| Deepseek R1 0528 Qwen3 8B 4bit | MIT | A 4-bit quantization of DeepSeek-R1-0528-Qwen3-8B, converted for the MLX framework and suited to text generation. | 924 | 1 |
| Deepseek R1 0528 4bit | | A 4-bit quantization of DeepSeek-R1-0528, converted for the MLX framework. | 157 | 9 |
| Qwen3 14B 4bit AWQ | Apache-2.0 | An MLX-format conversion of Qwen/Qwen3-14B, compressed to 4-bit with AWQ quantization for efficient inference under MLX. | 252 | 2 |
| Qwen3 235B A22B 4bit | Apache-2.0 | A 4-bit quantization of Qwen/Qwen3-235B-A22B, converted to MLX format and suited to text generation. | 974 | 6 |
| Qwen3 8B 4bit | Apache-2.0 | A 4-bit quantization of Qwen/Qwen3-8B in MLX format, for efficient inference on Apple silicon. | 2,131 | 2 |
| Qwen3 4B 4bit | Apache-2.0 | A 4-bit quantization of Qwen/Qwen3-4B in MLX format, built for efficient operation on Apple silicon. | 7,400 | 6 |
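Checkpoints published under mlx-community can typically be loaded with the `mlx-lm` package. Below is a minimal sketch, assuming `mlx-lm` is installed (`pip install mlx-lm`) and an Apple silicon Mac; the repo id `mlx-community/Qwen3-8B-4bit` is an assumption based on the usual mlx-community naming for the Qwen3 8B 4bit entry above.

```python
# Minimal sketch: text generation with a 4-bit MLX model via mlx-lm.
# Assumes `pip install mlx-lm`; the repo id below follows the usual
# mlx-community naming and corresponds to the Qwen3 8B 4bit entry.
from mlx_lm import load, generate

# Download (if needed) and load the quantized weights and tokenizer.
model, tokenizer = load("mlx-community/Qwen3-8B-4bit")

# Run a short generation on the loaded model.
prompt = "Explain 4-bit quantization in one sentence."
response = generate(model, tokenizer, prompt=prompt, max_tokens=128)
print(response)
```

The same `load`/`generate` calls apply to any of the listed models; only the repo id changes.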